Character-Level LSTM in PyTorch

In this notebook, I'll construct a character-level LSTM with PyTorch. The network will train character by character on some text, then generate new text character by character. As an example, I will train on Anna Karenina. This model will be able to generate new text based on the text from the book!

This network is based on Andrej Karpathy's post on RNNs and his implementation in Torch. Below is the general architecture of the character-wise RNN.

First let's load in our required resources for data loading and model creation.


In [1]:
import numpy as np
import torch
from torch import nn
import torch.nn.functional as F

Load in Data

Then, we'll load the Anna Karenina text file and convert it into integers for our network to use.


In [2]:
# Open text file and read in data as `text`
with open('data/anna.txt', 'r') as f:
    text = f.read()

Let's check out the first 100 characters and make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.


In [3]:
text[:100]


Out[3]:
'Chapter 1\n\n\nHappy families are all alike; every unhappy family is unhappy in its own\nway.\n\nEverythin'

Tokenization

In the cells below, I'm creating a couple of dictionaries to convert the characters to and from integers. Encoding the characters as integers makes them easier to use as input to the network.


In [4]:
# Encode the text and map each character to an integer and vice versa

# We create two dictionaries:
# 1. int2char, which maps integers to characters
# 2. char2int, which maps characters to unique integers
chars = tuple(set(text))
int2char = dict(enumerate(chars))
char2int = {ch: ii for ii, ch in int2char.items()}

# Encode the text
encoded = np.array([char2int[ch] for ch in text])

And we can see those same characters from above, encoded as integers.


In [5]:
encoded[:100]


Out[5]:
array([65, 73, 78, 12, 34, 63, 32, 35,  1, 70, 70, 70, 25, 78, 12, 12, 22,
       35, 37, 78, 21, 17, 77, 17, 63, 41, 35, 78, 32, 63, 35, 78, 77, 77,
       35, 78, 77, 17, 36, 63, 18, 35, 63, 14, 63, 32, 22, 35, 38, 46, 73,
       78, 12, 12, 22, 35, 37, 78, 21, 17, 77, 22, 35, 17, 41, 35, 38, 46,
       73, 78, 12, 12, 22, 35, 17, 46, 35, 17, 34, 41, 35, 58,  9, 46, 70,
        9, 78, 22, 26, 70, 70, 20, 14, 63, 32, 22, 34, 73, 17, 46])

Pre-processing the data

As you can see in our char-RNN image above, our LSTM expects an input that is one-hot encoded, meaning that each character is converted into an integer (via our created dictionary) and then into a column vector where only its corresponding integer index has the value of 1 and the rest of the vector is filled with 0's. Since we're one-hot encoding the data, let's make a function to do that!


In [6]:
def one_hot_encode(arr, n_labels):
    
    # Initialize the encoded array
    one_hot = np.zeros((np.multiply(*arr.shape), n_labels),
                       dtype=np.float32)
    
    # Fill the appropriate elements with ones
    one_hot[np.arange(one_hot.shape[0]), arr.flatten()] = 1.
    
    # Finally reshape it to get back to the original array
    one_hot = one_hot.reshape((*arr.shape, n_labels))
    
    return one_hot

In [7]:
# Check that the function works as expected
test_seq = np.array([[3, 5, 1]])
one_hot = one_hot_encode(test_seq, 8)

print(one_hot)


[[[0. 0. 0. 1. 0. 0. 0. 0.]
  [0. 0. 0. 0. 0. 1. 0. 0.]
  [0. 1. 0. 0. 0. 0. 0. 0.]]]

Making training mini-batches

To train on this data, we also want to create mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:


In this example, we'll take the encoded characters (passed in as the arr parameter) and split them into multiple sequences, given by batch_size. Each of our sequences will be seq_length long.

Creating Batches

1. The first thing we need to do is discard some of the text so we only have completely full mini-batches.

Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences in a batch) and $M$ is the seq_length, or number of time steps in a sequence. To get the total number of batches, $K$, that we can make from the array arr, you divide the length of arr by the number of characters per batch. Once you know the number of batches, you can get the total number of characters to keep from arr: $N \times M \times K$ (see the quick numeric check after these steps).

2. After that, we need to split arr into $N$ batches.

You can do this using arr.reshape(size) where size is a tuple containing the dimension sizes of the reshaped array. We know we want $N$ sequences in a batch, so let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder; it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M \cdot K)$.

3. Now that we have this array, we can iterate through it to get our mini-batches.

The idea is that each batch is an $N \times M$ window on the $N \times (M \cdot K)$ array. For each subsequent batch, the window moves over by seq_length. We also want to create both the input and target arrays; remember that the targets are just the inputs shifted over by one character. The way I like to do this window is to use range to take steps of size seq_length from $0$ to arr.shape[1], the total number of tokens in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is seq_length wide.
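
As a quick numeric check of the arithmetic above, here's a tiny sketch with made-up sizes (a 20-character "text", batch_size=2, seq_length=3), not the real dataset:

arr = np.arange(20)                                  # pretend encoded text of 20 characters
batch_size, seq_length = 2, 3
n_batches = len(arr) // (batch_size * seq_length)    # 20 // 6 = 3 full batches (K)
arr = arr[:n_batches * batch_size * seq_length]      # keep N * M * K = 2 * 3 * 3 = 18 characters
arr = arr.reshape((batch_size, -1))                  # shape (2, 9), i.e. N x (M * K)
print(arr)
# [[ 0  1  2  3  4  5  6  7  8]
#  [ 9 10 11 12 13 14 15 16 17]]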

TODO: Write the code for creating batches in the function below. The exercises in this notebook will not be easy. I've provided a notebook with solutions alongside this notebook. If you get stuck, check out the solutions. The most important thing is that you don't copy and paste the code into here; type out the solution code yourself.


In [8]:
def get_batches(arr, batch_size, seq_length):
    '''Create a generator that returns batches of size
       batch_size x seq_length from arr.
       
       Arguments
       ---------
       arr: Array you want to make batches from
       batch_size: Batch size, the number of sequences per batch
       seq_length: Number of encoded chars in a sequence
    '''
    
    ## TODO: Get the number of batches we can make
    n_batches = len(arr) // (batch_size * seq_length)
    
    ## TODO: Keep only enough characters to make full batches
    arr = arr[:(n_batches * batch_size * seq_length)]
    
    ## TODO: Reshape into batch_size rows
    arr = arr.reshape((batch_size, -1))
    
    ## TODO: Iterate over the batches using a window of size seq_length
    for n in range(0, arr.shape[1], seq_length):
        # The features
        x = arr[:, n:(n + seq_length)]
        # The targets, shifted by one
        y = np.zeros_like(x)
        try:
            y[:, :-1], y[:, -1] = x[:, 1:], arr[:, n + seq_length]
        except IndexError:
            y[:, :-1], y[:, -1] = x[:, 1:], arr[:, 0]
        yield x, y

Test Your Implementation

Now I'll make some data sets and we can check out what's going on as we batch data. Here, as an example, I'm going to use a batch size of 8 and 50 sequence steps.


In [9]:
batches = get_batches(encoded, 8, 50)
x, y = next(batches)

In [10]:
# printing out the first 10 characters of each sequence in the batch
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])


x
 [[65 73 78 12 34 63 32 35  1 70]
 [41 58 46 35 34 73 78 34 35 78]
 [63 46 28 35 58 32 35 78 35 37]
 [41 35 34 73 63 35 10 73 17 63]
 [35 41 78  9 35 73 63 32 35 34]
 [10 38 41 41 17 58 46 35 78 46]
 [35 48 46 46 78 35 73 78 28 35]
 [ 4 82 77 58 46 41 36 22 26 35]]

y
 [[73 78 12 34 63 32 35  1 70 70]
 [58 46 35 34 73 78 34 35 78 34]
 [46 28 35 58 32 35 78 35 37 58]
 [35 34 73 63 35 10 73 17 63 37]
 [41 78  9 35 73 63 32 35 34 63]
 [38 41 41 17 58 46 35 78 46 28]
 [48 46 46 78 35 73 78 28 35 41]
 [82 77 58 46 41 36 22 26 35 29]]

If you implemented get_batches correctly, the above output should look something like

x
 [[25  8 60 11 45 27 28 73  1  2]
 [17  7 20 73 45  8 60 45 73 60]
 [27 20 80 73  7 28 73 60 73 65]
 [17 73 45  8 27 73 66  8 46 27]
 [73 17 60 12 73  8 27 28 73 45]
 [66 64 17 17 46  7 20 73 60 20]
 [73 76 20 20 60 73  8 60 80 73]
 [47 35 43  7 20 17 24 50 37 73]]

y
 [[ 8 60 11 45 27 28 73  1  2  2]
 [ 7 20 73 45  8 60 45 73 60 45]
 [20 80 73  7 28 73 60 73 65  7]
 [73 45  8 27 73 66  8 46 27 65]
 [17 60 12 73  8 27 28 73 45 27]
 [64 17 17 46  7 20 73 60 20 80]
 [76 20 20 60 73  8 60 80 73 17]
 [35 43  7 20 17 24 50 37 73 36]]

although the exact numbers may be different. Check to make sure the data is shifted over one step for y.
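
If you'd rather verify this programmatically than by eye, here's a small sanity check (not part of the original exercise) on the batch pulled above:

# The targets should be the inputs shifted one step to the left
assert (x[:, 1:] == y[:, :-1]).all()
print('Targets are shifted by one step.')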


Defining the network with PyTorch

Below is where you'll define the network.

Next, you'll use PyTorch to define the architecture of the network. We start by defining the layers and operations we want, then define a method for the forward pass. You've also been given a method for predicting characters.

Model Structure

In __init__ the suggested structure is as follows:

  • Create and store the necessary dictionaries (this has been done for you)
  • Define an LSTM layer that takes as params: an input size (the number of characters), a hidden layer size n_hidden, a number of layers n_layers, a dropout probability drop_prob, and a batch_first boolean (True, since we are batching)
  • Define a dropout layer with drop_prob
  • Define a fully-connected layer with params: input size n_hidden and output size (the number of characters)
  • Finally, initialize the weights (again, this has been given)

Note that some parameters have been named and given in the __init__ function, and we store them with assignments like self.drop_prob = drop_prob.


LSTM Inputs/Outputs

You can create a basic LSTM layer as follows

self.lstm = nn.LSTM(input_size, n_hidden, n_layers, 
                            dropout=drop_prob, batch_first=True)

where input_size is the number of characters this cell expects to see as sequential input, and n_hidden is the number of units in the hidden layers in the cell. We can add dropout by passing a dropout parameter with a specified probability; this automatically adds dropout to the outputs of each LSTM layer except the last. Passing n_layers stacks that many LSTM layers, with the output of one layer feeding into the next. Finally, in the forward function, we use .view to flatten the LSTM output before it goes into the fully-connected layer.

We also need to create an initial hidden state of all zeros. This is done like so

self.init_hidden()
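
To make the expected tensor shapes concrete, here's a standalone sketch with made-up sizes (not the model we define below). With batch_first=True the input is (batch, seq_length, input_size), while the hidden and cell states are (n_layers, batch, n_hidden):

lstm = nn.LSTM(input_size=83, hidden_size=256, num_layers=2,
               dropout=0.5, batch_first=True)
x = torch.zeros(8, 50, 83)                             # a batch of 8 one-hot sequences, 50 steps long
h0 = (torch.zeros(2, 8, 256), torch.zeros(2, 8, 256))  # zeroed hidden and cell states
out, hn = lstm(x, h0)
print(out.shape)   # torch.Size([8, 50, 256])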

In [11]:
# check if GPU is available
train_on_gpu = torch.cuda.is_available()
if(train_on_gpu):
    print('Training on GPU!')
else: 
    print('No GPU available, training on CPU; consider making n_epochs very small.')


Training on GPU!

In [12]:
class CharRNN(nn.Module):
    
    def __init__(self,
                 tokens,
                 n_hidden=256,
                 n_layers=2,
                 drop_prob=0.5,
                 lr=0.001):
        
        super().__init__()
        self.drop_prob = drop_prob
        self.n_layers = n_layers
        self.n_hidden = n_hidden
        self.lr = lr
        
        # creating character dictionaries
        self.chars = tokens
        self.int2char = dict(enumerate(self.chars))
        self.char2int = {ch: ii for ii, ch in self.int2char.items()}
        
        ## TODO: define the layers of the model
        self.lstm = nn.LSTM(input_size=len(self.chars),
                            hidden_size=n_hidden,
                            num_layers=n_layers,
                            dropout=drop_prob,
                            batch_first=True)
        ## Define dropout
        self.dropout = nn.Dropout(drop_prob)
        ## Define the final fully-connected layer
        self.fc_out = nn.Linear(in_features=n_hidden,
                                out_features=len(self.chars))
    
    def forward(self, x, hidden):
        ''' Forward pass through the network. 
            These inputs are x, and the hidden/cell state `hidden`. '''
                
        ## TODO: Get the outputs and the new hidden state from the lstm
        lstm_out, hidden = self.lstm(x, hidden)
        
        after_dropout = self.dropout(lstm_out)
        
        # Reshaping the data
        reshaped = after_dropout.contiguous().view(-1, self.n_hidden)
        
        # Return the final output and the hidden state
        out = self.fc_out(reshaped)
        return out, hidden
    
    
    def init_hidden(self, batch_size):
        ''' Initializes hidden state '''
        # Create two new tensors with sizes n_layers x batch_size x n_hidden,
        # initialized to zero, for hidden state and cell state of LSTM
        weight = next(self.parameters()).data
        
        if (train_on_gpu):
            hidden = (weight.new(self.n_layers, batch_size, self.n_hidden).zero_().cuda(),
                  weight.new(self.n_layers, batch_size, self.n_hidden).zero_().cuda())
        else:
            hidden = (weight.new(self.n_layers, batch_size, self.n_hidden).zero_(),
                      weight.new(self.n_layers, batch_size, self.n_hidden).zero_())
        
        return hidden

Time to train

The train function gives us the ability to set the number of epochs, the learning rate, and other parameters.

Below we're using an Adam optimizer and cross entropy loss since we are looking at character class scores as output. We calculate the loss and perform backpropagation, as usual!
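
Note that nn.CrossEntropyLoss expects raw class scores of shape (batch_size*seq_length, n_chars) and integer targets of shape (batch_size*seq_length,), which is why the forward pass flattens the LSTM output and the training loop calls targets.view(batch_size*seq_length). A quick illustration with made-up sizes:

criterion = nn.CrossEntropyLoss()
scores = torch.randn(8 * 50, 83)           # flattened class scores, one row per character position
targets = torch.randint(0, 83, (8 * 50,))  # flattened encoded target characters
print(criterion(scores, targets))          # a single scalar loss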

A couple of details about training:

  • Within the batch loop, we detach the hidden state from its history, setting it equal to a new tuple, because an LSTM's hidden state is a tuple of the hidden and cell states.
  • We use clip_grad_norm_ to help prevent exploding gradients.

In [13]:
def train(net, data, epochs=10, batch_size=10, seq_length=50, lr=0.001, clip=5, val_frac=0.1, print_every=10):
    ''' Training a network 
    
        Arguments
        ---------
        
        net: CharRNN network
        data: text data to train the network
        epochs: Number of epochs to train
        batch_size: Number of mini-sequences per mini-batch, aka batch size
        seq_length: Number of character steps per mini-batch
        lr: learning rate
        clip: gradient clipping
        val_frac: Fraction of data to hold out for validation
        print_every: Number of steps for printing training and validation loss
    
    '''
    net.train()
    
    opt = torch.optim.Adam(params=net.parameters(),
                           lr=lr)
    criterion = nn.CrossEntropyLoss()
    
    # Create training and validation data
    val_idx = int(len(data)*(1-val_frac))
    data, val_data = data[:val_idx], data[val_idx:]
    
    if(train_on_gpu):
        net.cuda()
    
    counter = 0
    n_chars = len(net.chars)
    for e in range(epochs):
        # Initialize hidden state
        h = net.init_hidden(batch_size)
        
        for x, y in get_batches(data, batch_size, seq_length):
            counter += 1
            
            # One-hot encode our data and make them Torch tensors
            x = one_hot_encode(x, n_chars)
            inputs, targets = torch.from_numpy(x), torch.from_numpy(y).type(torch.LongTensor)
            
            if(train_on_gpu):
                inputs, targets = inputs.cuda(), targets.cuda()

            # Creating new variables for the hidden state, otherwise
            # we'd backprop through the entire training history
            h = tuple([each.data for each in h])

            # Zero accumulated gradients
            net.zero_grad()
            
            # Get the output from the model
            output, h = net(inputs, h)
            
            # Calculate the loss and perform backprop
            loss = criterion(output, targets.view(batch_size * seq_length))
            loss.backward()
            # `clip_grad_norm_` helps prevent the exploding gradient problem in RNNs / LSTMs.
            nn.utils.clip_grad_norm_(net.parameters(), clip)
            opt.step()
            
            # loss stats
            if counter % print_every == 0:
                # Get validation loss
                val_h = net.init_hidden(batch_size)
                val_losses = []
                net.eval()
                for x, y in get_batches(val_data, batch_size, seq_length):
                    # One-hot encode our data and make them Torch tensors
                    x = one_hot_encode(x, n_chars)
                    x, y = torch.from_numpy(x), torch.from_numpy(y).type(torch.LongTensor)
                    
                    # Creating new variables for the hidden state, otherwise
                    # we'd backprop through the entire training history
                    val_h = tuple([each.data for each in val_h])
                    
                    inputs, targets = x, y
                    if(train_on_gpu):
                        inputs, targets = inputs.cuda(), targets.cuda()

                    output, val_h = net(inputs, val_h)
                    val_loss = criterion(output, targets.view(batch_size * seq_length))
                
                    val_losses.append(val_loss.item())
                
                # Reset to train mode after iterating through the validation data
                net.train() 
                
                print("Epoch: {}/{}...".format(e+1, epochs),
                      "Step: {}...".format(counter),
                      "Loss: {:.4f}...".format(loss.item()),
                      "Val Loss: {:.4f}".format(np.mean(val_losses)))

Instantiating the model

Now we can actually train the network. First we'll create the network itself, with some given hyperparameters. Then, define the mini-batch size and sequence length, and start training!


In [14]:
## TODO: set your model hyperparameters
# Define and print the net
n_hidden = 256
n_layers = 2

net = CharRNN(chars, n_hidden, n_layers)
print(net)


CharRNN(
  (lstm): LSTM(83, 256, num_layers=2, batch_first=True, dropout=0.5)
  (dropout): Dropout(p=0.5)
  (fc_out): Linear(in_features=256, out_features=83, bias=True)
)

Set your training hyperparameters!


In [15]:
batch_size = 128
seq_length = 100
# Start small if you are just testing initial behavior
n_epochs = 20

# Train the model
train(net, encoded, epochs=n_epochs,
      batch_size=batch_size, seq_length=seq_length,
      lr=0.001, print_every=10)


Epoch: 1/20... Step: 10... Loss: 3.3581... Val Loss: 3.2264
Epoch: 1/20... Step: 20... Loss: 3.1756... Val Loss: 3.1289
Epoch: 1/20... Step: 30... Loss: 3.1677... Val Loss: 3.1264
Epoch: 1/20... Step: 40... Loss: 3.1317... Val Loss: 3.1216
Epoch: 1/20... Step: 50... Loss: 3.1618... Val Loss: 3.1198
Epoch: 1/20... Step: 60... Loss: 3.1285... Val Loss: 3.1183
Epoch: 1/20... Step: 70... Loss: 3.1211... Val Loss: 3.1172
Epoch: 1/20... Step: 80... Loss: 3.1354... Val Loss: 3.1164
Epoch: 1/20... Step: 90... Loss: 3.1315... Val Loss: 3.1139
Epoch: 1/20... Step: 100... Loss: 3.1243... Val Loss: 3.1107
Epoch: 1/20... Step: 110... Loss: 3.1220... Val Loss: 3.1034
Epoch: 1/20... Step: 120... Loss: 3.0927... Val Loss: 3.0913
Epoch: 1/20... Step: 130... Loss: 3.0898... Val Loss: 3.0675
Epoch: 2/20... Step: 140... Loss: 3.0531... Val Loss: 3.0249
Epoch: 2/20... Step: 150... Loss: 2.9908... Val Loss: 2.9563
Epoch: 2/20... Step: 160... Loss: 2.8979... Val Loss: 2.8621
Epoch: 2/20... Step: 170... Loss: 2.7894... Val Loss: 2.7710
Epoch: 2/20... Step: 180... Loss: 2.7215... Val Loss: 2.6942
Epoch: 2/20... Step: 190... Loss: 2.6645... Val Loss: 2.6369
Epoch: 2/20... Step: 200... Loss: 2.6623... Val Loss: 2.5881
Epoch: 2/20... Step: 210... Loss: 2.5980... Val Loss: 2.5538
Epoch: 2/20... Step: 220... Loss: 2.5665... Val Loss: 2.5248
Epoch: 2/20... Step: 230... Loss: 2.5448... Val Loss: 2.5026
Epoch: 2/20... Step: 240... Loss: 2.5301... Val Loss: 2.4749
Epoch: 2/20... Step: 250... Loss: 2.4791... Val Loss: 2.4537
Epoch: 2/20... Step: 260... Loss: 2.4594... Val Loss: 2.4301
Epoch: 2/20... Step: 270... Loss: 2.4536... Val Loss: 2.4117
Epoch: 3/20... Step: 280... Loss: 2.4532... Val Loss: 2.3981
Epoch: 3/20... Step: 290... Loss: 2.4295... Val Loss: 2.3805
Epoch: 3/20... Step: 300... Loss: 2.4218... Val Loss: 2.3647
Epoch: 3/20... Step: 310... Loss: 2.4077... Val Loss: 2.3598
Epoch: 3/20... Step: 320... Loss: 2.3943... Val Loss: 2.3399
Epoch: 3/20... Step: 330... Loss: 2.3611... Val Loss: 2.3280
Epoch: 3/20... Step: 340... Loss: 2.3705... Val Loss: 2.3147
Epoch: 3/20... Step: 350... Loss: 2.3604... Val Loss: 2.2972
Epoch: 3/20... Step: 360... Loss: 2.3019... Val Loss: 2.2862
Epoch: 3/20... Step: 370... Loss: 2.3362... Val Loss: 2.2731
Epoch: 3/20... Step: 380... Loss: 2.3065... Val Loss: 2.2609
Epoch: 3/20... Step: 390... Loss: 2.2980... Val Loss: 2.2471
Epoch: 3/20... Step: 400... Loss: 2.2715... Val Loss: 2.2340
Epoch: 3/20... Step: 410... Loss: 2.2781... Val Loss: 2.2255
Epoch: 4/20... Step: 420... Loss: 2.2572... Val Loss: 2.2130
Epoch: 4/20... Step: 430... Loss: 2.2508... Val Loss: 2.1961
Epoch: 4/20... Step: 440... Loss: 2.2470... Val Loss: 2.1888
Epoch: 4/20... Step: 450... Loss: 2.1846... Val Loss: 2.1787
Epoch: 4/20... Step: 460... Loss: 2.2082... Val Loss: 2.1659
Epoch: 4/20... Step: 470... Loss: 2.2104... Val Loss: 2.1543
Epoch: 4/20... Step: 480... Loss: 2.2044... Val Loss: 2.1452
Epoch: 4/20... Step: 490... Loss: 2.1995... Val Loss: 2.1380
Epoch: 4/20... Step: 500... Loss: 2.1940... Val Loss: 2.1271
Epoch: 4/20... Step: 510... Loss: 2.1937... Val Loss: 2.1210
Epoch: 4/20... Step: 520... Loss: 2.1962... Val Loss: 2.1129
Epoch: 4/20... Step: 530... Loss: 2.1510... Val Loss: 2.1007
Epoch: 4/20... Step: 540... Loss: 2.1206... Val Loss: 2.0934
Epoch: 4/20... Step: 550... Loss: 2.1430... Val Loss: 2.0826
Epoch: 5/20... Step: 560... Loss: 2.1308... Val Loss: 2.0739
Epoch: 5/20... Step: 570... Loss: 2.1204... Val Loss: 2.0689
Epoch: 5/20... Step: 580... Loss: 2.0987... Val Loss: 2.0606
Epoch: 5/20... Step: 590... Loss: 2.1043... Val Loss: 2.0509
Epoch: 5/20... Step: 600... Loss: 2.0831... Val Loss: 2.0440
Epoch: 5/20... Step: 610... Loss: 2.0814... Val Loss: 2.0393
Epoch: 5/20... Step: 620... Loss: 2.0677... Val Loss: 2.0385
Epoch: 5/20... Step: 630... Loss: 2.1012... Val Loss: 2.0254
Epoch: 5/20... Step: 640... Loss: 2.0735... Val Loss: 2.0162
Epoch: 5/20... Step: 650... Loss: 2.0559... Val Loss: 2.0102
Epoch: 5/20... Step: 660... Loss: 2.0224... Val Loss: 2.0031
Epoch: 5/20... Step: 670... Loss: 2.0687... Val Loss: 1.9958
Epoch: 5/20... Step: 680... Loss: 2.0571... Val Loss: 1.9915
Epoch: 5/20... Step: 690... Loss: 2.0211... Val Loss: 1.9870
Epoch: 6/20... Step: 700... Loss: 2.0147... Val Loss: 1.9756
Epoch: 6/20... Step: 710... Loss: 2.0149... Val Loss: 1.9671
Epoch: 6/20... Step: 720... Loss: 2.0033... Val Loss: 1.9615
Epoch: 6/20... Step: 730... Loss: 2.0014... Val Loss: 1.9580
Epoch: 6/20... Step: 740... Loss: 1.9889... Val Loss: 1.9496
Epoch: 6/20... Step: 750... Loss: 1.9625... Val Loss: 1.9434
Epoch: 6/20... Step: 760... Loss: 1.9960... Val Loss: 1.9411
Epoch: 6/20... Step: 770... Loss: 1.9782... Val Loss: 1.9324
Epoch: 6/20... Step: 780... Loss: 1.9698... Val Loss: 1.9258
Epoch: 6/20... Step: 790... Loss: 1.9717... Val Loss: 1.9226
Epoch: 6/20... Step: 800... Loss: 1.9581... Val Loss: 1.9134
Epoch: 6/20... Step: 810... Loss: 1.9564... Val Loss: 1.9084
Epoch: 6/20... Step: 820... Loss: 1.9429... Val Loss: 1.9034
Epoch: 6/20... Step: 830... Loss: 1.9680... Val Loss: 1.8978
Epoch: 7/20... Step: 840... Loss: 1.9154... Val Loss: 1.8905
Epoch: 7/20... Step: 850... Loss: 1.9249... Val Loss: 1.8837
Epoch: 7/20... Step: 860... Loss: 1.9285... Val Loss: 1.8800
Epoch: 7/20... Step: 870... Loss: 1.9207... Val Loss: 1.8750
Epoch: 7/20... Step: 880... Loss: 1.9179... Val Loss: 1.8688
Epoch: 7/20... Step: 890... Loss: 1.9274... Val Loss: 1.8640
Epoch: 7/20... Step: 900... Loss: 1.8930... Val Loss: 1.8577
Epoch: 7/20... Step: 910... Loss: 1.8869... Val Loss: 1.8549
Epoch: 7/20... Step: 920... Loss: 1.9007... Val Loss: 1.8504
Epoch: 7/20... Step: 930... Loss: 1.8858... Val Loss: 1.8489
Epoch: 7/20... Step: 940... Loss: 1.8818... Val Loss: 1.8400
Epoch: 7/20... Step: 950... Loss: 1.8966... Val Loss: 1.8372
Epoch: 7/20... Step: 960... Loss: 1.8873... Val Loss: 1.8324
Epoch: 7/20... Step: 970... Loss: 1.9005... Val Loss: 1.8264
Epoch: 8/20... Step: 980... Loss: 1.8775... Val Loss: 1.8187
Epoch: 8/20... Step: 990... Loss: 1.8734... Val Loss: 1.8150
Epoch: 8/20... Step: 1000... Loss: 1.8590... Val Loss: 1.8088
Epoch: 8/20... Step: 1010... Loss: 1.8962... Val Loss: 1.8077
Epoch: 8/20... Step: 1020... Loss: 1.8501... Val Loss: 1.8034
Epoch: 8/20... Step: 1030... Loss: 1.8375... Val Loss: 1.7974
Epoch: 8/20... Step: 1040... Loss: 1.8439... Val Loss: 1.7988
Epoch: 8/20... Step: 1050... Loss: 1.8407... Val Loss: 1.7869
Epoch: 8/20... Step: 1060... Loss: 1.8435... Val Loss: 1.7838
Epoch: 8/20... Step: 1070... Loss: 1.8484... Val Loss: 1.7837
Epoch: 8/20... Step: 1080... Loss: 1.8399... Val Loss: 1.7747
Epoch: 8/20... Step: 1090... Loss: 1.8205... Val Loss: 1.7728
Epoch: 8/20... Step: 1100... Loss: 1.8080... Val Loss: 1.7668
Epoch: 8/20... Step: 1110... Loss: 1.8137... Val Loss: 1.7637
Epoch: 9/20... Step: 1120... Loss: 1.8188... Val Loss: 1.7583
Epoch: 9/20... Step: 1130... Loss: 1.8238... Val Loss: 1.7551
Epoch: 9/20... Step: 1140... Loss: 1.8063... Val Loss: 1.7470
Epoch: 9/20... Step: 1150... Loss: 1.8279... Val Loss: 1.7550
Epoch: 9/20... Step: 1160... Loss: 1.7809... Val Loss: 1.7476
Epoch: 9/20... Step: 1170... Loss: 1.7816... Val Loss: 1.7439
Epoch: 9/20... Step: 1180... Loss: 1.7841... Val Loss: 1.7400
Epoch: 9/20... Step: 1190... Loss: 1.8052... Val Loss: 1.7315
Epoch: 9/20... Step: 1200... Loss: 1.7648... Val Loss: 1.7291
Epoch: 9/20... Step: 1210... Loss: 1.7778... Val Loss: 1.7264
Epoch: 9/20... Step: 1220... Loss: 1.7669... Val Loss: 1.7226
Epoch: 9/20... Step: 1230... Loss: 1.7577... Val Loss: 1.7231
Epoch: 9/20... Step: 1240... Loss: 1.7506... Val Loss: 1.7133
Epoch: 9/20... Step: 1250... Loss: 1.7510... Val Loss: 1.7135
Epoch: 10/20... Step: 1260... Loss: 1.7698... Val Loss: 1.7054
Epoch: 10/20... Step: 1270... Loss: 1.7531... Val Loss: 1.7060
Epoch: 10/20... Step: 1280... Loss: 1.7741... Val Loss: 1.6971
Epoch: 10/20... Step: 1290... Loss: 1.7509... Val Loss: 1.6954
Epoch: 10/20... Step: 1300... Loss: 1.7415... Val Loss: 1.6986
Epoch: 10/20... Step: 1310... Loss: 1.7581... Val Loss: 1.6911
Epoch: 10/20... Step: 1320... Loss: 1.7243... Val Loss: 1.6924
Epoch: 10/20... Step: 1330... Loss: 1.7399... Val Loss: 1.6846
Epoch: 10/20... Step: 1340... Loss: 1.7255... Val Loss: 1.6822
Epoch: 10/20... Step: 1350... Loss: 1.7156... Val Loss: 1.6785
Epoch: 10/20... Step: 1360... Loss: 1.7237... Val Loss: 1.6798
Epoch: 10/20... Step: 1370... Loss: 1.6998... Val Loss: 1.6747
Epoch: 10/20... Step: 1380... Loss: 1.7428... Val Loss: 1.6664
Epoch: 10/20... Step: 1390... Loss: 1.7379... Val Loss: 1.6678
Epoch: 11/20... Step: 1400... Loss: 1.7305... Val Loss: 1.6613
Epoch: 11/20... Step: 1410... Loss: 1.7423... Val Loss: 1.6589
Epoch: 11/20... Step: 1420... Loss: 1.7331... Val Loss: 1.6549
Epoch: 11/20... Step: 1430... Loss: 1.6902... Val Loss: 1.6560
Epoch: 11/20... Step: 1440... Loss: 1.7459... Val Loss: 1.6542
Epoch: 11/20... Step: 1450... Loss: 1.6636... Val Loss: 1.6478
Epoch: 11/20... Step: 1460... Loss: 1.6860... Val Loss: 1.6500
Epoch: 11/20... Step: 1470... Loss: 1.6855... Val Loss: 1.6471
Epoch: 11/20... Step: 1480... Loss: 1.7037... Val Loss: 1.6436
Epoch: 11/20... Step: 1490... Loss: 1.6899... Val Loss: 1.6385
Epoch: 11/20... Step: 1500... Loss: 1.6653... Val Loss: 1.6404
Epoch: 11/20... Step: 1510... Loss: 1.6568... Val Loss: 1.6381
Epoch: 11/20... Step: 1520... Loss: 1.7029... Val Loss: 1.6310
Epoch: 12/20... Step: 1530... Loss: 1.7316... Val Loss: 1.6310
Epoch: 12/20... Step: 1540... Loss: 1.6902... Val Loss: 1.6263
Epoch: 12/20... Step: 1550... Loss: 1.7041... Val Loss: 1.6211
Epoch: 12/20... Step: 1560... Loss: 1.7093... Val Loss: 1.6201
Epoch: 12/20... Step: 1570... Loss: 1.6603... Val Loss: 1.6212
Epoch: 12/20... Step: 1580... Loss: 1.6246... Val Loss: 1.6206
Epoch: 12/20... Step: 1590... Loss: 1.6275... Val Loss: 1.6158
Epoch: 12/20... Step: 1600... Loss: 1.6592... Val Loss: 1.6219
Epoch: 12/20... Step: 1610... Loss: 1.6509... Val Loss: 1.6182
Epoch: 12/20... Step: 1620... Loss: 1.6650... Val Loss: 1.6121
Epoch: 12/20... Step: 1630... Loss: 1.6682... Val Loss: 1.6061
Epoch: 12/20... Step: 1640... Loss: 1.6431... Val Loss: 1.6106
Epoch: 12/20... Step: 1650... Loss: 1.6115... Val Loss: 1.6046
Epoch: 12/20... Step: 1660... Loss: 1.6600... Val Loss: 1.5999
Epoch: 13/20... Step: 1670... Loss: 1.6498... Val Loss: 1.6041
Epoch: 13/20... Step: 1680... Loss: 1.6641... Val Loss: 1.5957
Epoch: 13/20... Step: 1690... Loss: 1.6278... Val Loss: 1.5905
Epoch: 13/20... Step: 1700... Loss: 1.6350... Val Loss: 1.5941
Epoch: 13/20... Step: 1710... Loss: 1.6035... Val Loss: 1.5917
Epoch: 13/20... Step: 1720... Loss: 1.6232... Val Loss: 1.5898
Epoch: 13/20... Step: 1730... Loss: 1.6681... Val Loss: 1.5824
Epoch: 13/20... Step: 1740... Loss: 1.6407... Val Loss: 1.5942
Epoch: 13/20... Step: 1750... Loss: 1.5925... Val Loss: 1.5879
Epoch: 13/20... Step: 1760... Loss: 1.6254... Val Loss: 1.5806
Epoch: 13/20... Step: 1770... Loss: 1.6314... Val Loss: 1.5775
Epoch: 13/20... Step: 1780... Loss: 1.6097... Val Loss: 1.5819
Epoch: 13/20... Step: 1790... Loss: 1.5982... Val Loss: 1.5776
Epoch: 13/20... Step: 1800... Loss: 1.6223... Val Loss: 1.5756
Epoch: 14/20... Step: 1810... Loss: 1.6176... Val Loss: 1.5791
Epoch: 14/20... Step: 1820... Loss: 1.6185... Val Loss: 1.5688
Epoch: 14/20... Step: 1830... Loss: 1.6231... Val Loss: 1.5625
Epoch: 14/20... Step: 1840... Loss: 1.5775... Val Loss: 1.5671
Epoch: 14/20... Step: 1850... Loss: 1.5524... Val Loss: 1.5634
Epoch: 14/20... Step: 1860... Loss: 1.6169... Val Loss: 1.5614
Epoch: 14/20... Step: 1870... Loss: 1.6054... Val Loss: 1.5555
Epoch: 14/20... Step: 1880... Loss: 1.6160... Val Loss: 1.5594
Epoch: 14/20... Step: 1890... Loss: 1.6202... Val Loss: 1.5614
Epoch: 14/20... Step: 1900... Loss: 1.6000... Val Loss: 1.5538
Epoch: 14/20... Step: 1910... Loss: 1.6058... Val Loss: 1.5518
Epoch: 14/20... Step: 1920... Loss: 1.5940... Val Loss: 1.5530
Epoch: 14/20... Step: 1930... Loss: 1.5552... Val Loss: 1.5493
Epoch: 14/20... Step: 1940... Loss: 1.6208... Val Loss: 1.5468
Epoch: 15/20... Step: 1950... Loss: 1.5825... Val Loss: 1.5543
Epoch: 15/20... Step: 1960... Loss: 1.5863... Val Loss: 1.5479
Epoch: 15/20... Step: 1970... Loss: 1.5795... Val Loss: 1.5415
Epoch: 15/20... Step: 1980... Loss: 1.5646... Val Loss: 1.5439
Epoch: 15/20... Step: 1990... Loss: 1.5768... Val Loss: 1.5385
Epoch: 15/20... Step: 2000... Loss: 1.5551... Val Loss: 1.5393
Epoch: 15/20... Step: 2010... Loss: 1.5697... Val Loss: 1.5322
Epoch: 15/20... Step: 2020... Loss: 1.5932... Val Loss: 1.5351
Epoch: 15/20... Step: 2030... Loss: 1.5705... Val Loss: 1.5377
Epoch: 15/20... Step: 2040... Loss: 1.5510... Val Loss: 1.5285
Epoch: 15/20... Step: 2050... Loss: 1.5425... Val Loss: 1.5309
Epoch: 15/20... Step: 2060... Loss: 1.5710... Val Loss: 1.5314
Epoch: 15/20... Step: 2070... Loss: 1.5798... Val Loss: 1.5263
Epoch: 15/20... Step: 2080... Loss: 1.5606... Val Loss: 1.5236
Epoch: 16/20... Step: 2090... Loss: 1.5621... Val Loss: 1.5270
Epoch: 16/20... Step: 2100... Loss: 1.5574... Val Loss: 1.5261
Epoch: 16/20... Step: 2110... Loss: 1.5487... Val Loss: 1.5203
Epoch: 16/20... Step: 2120... Loss: 1.5693... Val Loss: 1.5218
Epoch: 16/20... Step: 2130... Loss: 1.5416... Val Loss: 1.5222
Epoch: 16/20... Step: 2140... Loss: 1.5252... Val Loss: 1.5171
Epoch: 16/20... Step: 2150... Loss: 1.5591... Val Loss: 1.5146
Epoch: 16/20... Step: 2160... Loss: 1.5379... Val Loss: 1.5164
Epoch: 16/20... Step: 2170... Loss: 1.5389... Val Loss: 1.5148
Epoch: 16/20... Step: 2180... Loss: 1.5314... Val Loss: 1.5128
Epoch: 16/20... Step: 2190... Loss: 1.5639... Val Loss: 1.5095
Epoch: 16/20... Step: 2200... Loss: 1.5398... Val Loss: 1.5128
Epoch: 16/20... Step: 2210... Loss: 1.5057... Val Loss: 1.5098
Epoch: 16/20... Step: 2220... Loss: 1.5460... Val Loss: 1.5086
Epoch: 17/20... Step: 2230... Loss: 1.5206... Val Loss: 1.5085
Epoch: 17/20... Step: 2240... Loss: 1.5393... Val Loss: 1.5040
Epoch: 17/20... Step: 2250... Loss: 1.5188... Val Loss: 1.5048
Epoch: 17/20... Step: 2260... Loss: 1.5310... Val Loss: 1.5010
Epoch: 17/20... Step: 2270... Loss: 1.5367... Val Loss: 1.5016
Epoch: 17/20... Step: 2280... Loss: 1.5312... Val Loss: 1.4993
Epoch: 17/20... Step: 2290... Loss: 1.5233... Val Loss: 1.5030
Epoch: 17/20... Step: 2300... Loss: 1.4930... Val Loss: 1.4979
Epoch: 17/20... Step: 2310... Loss: 1.5184... Val Loss: 1.4963
Epoch: 17/20... Step: 2320... Loss: 1.5137... Val Loss: 1.4950
Epoch: 17/20... Step: 2330... Loss: 1.5196... Val Loss: 1.4911
Epoch: 17/20... Step: 2340... Loss: 1.5339... Val Loss: 1.4918
Epoch: 17/20... Step: 2350... Loss: 1.5417... Val Loss: 1.4881
Epoch: 17/20... Step: 2360... Loss: 1.5433... Val Loss: 1.4888
Epoch: 18/20... Step: 2370... Loss: 1.5172... Val Loss: 1.4925
Epoch: 18/20... Step: 2380... Loss: 1.5286... Val Loss: 1.4860
Epoch: 18/20... Step: 2390... Loss: 1.5077... Val Loss: 1.4869
Epoch: 18/20... Step: 2400... Loss: 1.5489... Val Loss: 1.4823
Epoch: 18/20... Step: 2410... Loss: 1.5290... Val Loss: 1.4820
Epoch: 18/20... Step: 2420... Loss: 1.5189... Val Loss: 1.4811
Epoch: 18/20... Step: 2430... Loss: 1.5266... Val Loss: 1.4847
Epoch: 18/20... Step: 2440... Loss: 1.4976... Val Loss: 1.4817
Epoch: 18/20... Step: 2450... Loss: 1.5092... Val Loss: 1.4800
Epoch: 18/20... Step: 2460... Loss: 1.5128... Val Loss: 1.4770
Epoch: 18/20... Step: 2470... Loss: 1.5137... Val Loss: 1.4748
Epoch: 18/20... Step: 2480... Loss: 1.5060... Val Loss: 1.4760
Epoch: 18/20... Step: 2490... Loss: 1.5017... Val Loss: 1.4734
Epoch: 18/20... Step: 2500... Loss: 1.4959... Val Loss: 1.4764
Epoch: 19/20... Step: 2510... Loss: 1.5190... Val Loss: 1.4757
Epoch: 19/20... Step: 2520... Loss: 1.5093... Val Loss: 1.4706
Epoch: 19/20... Step: 2530... Loss: 1.5119... Val Loss: 1.4697
Epoch: 19/20... Step: 2540... Loss: 1.5252... Val Loss: 1.4692
Epoch: 19/20... Step: 2550... Loss: 1.4861... Val Loss: 1.4666
Epoch: 19/20... Step: 2560... Loss: 1.4974... Val Loss: 1.4667
Epoch: 19/20... Step: 2570... Loss: 1.4966... Val Loss: 1.4678
Epoch: 19/20... Step: 2580... Loss: 1.5236... Val Loss: 1.4659
Epoch: 19/20... Step: 2590... Loss: 1.4742... Val Loss: 1.4637
Epoch: 19/20... Step: 2600... Loss: 1.4869... Val Loss: 1.4647
Epoch: 19/20... Step: 2610... Loss: 1.4993... Val Loss: 1.4621
Epoch: 19/20... Step: 2620... Loss: 1.4684... Val Loss: 1.4606
Epoch: 19/20... Step: 2630... Loss: 1.4800... Val Loss: 1.4587
Epoch: 19/20... Step: 2640... Loss: 1.4954... Val Loss: 1.4634
Epoch: 20/20... Step: 2650... Loss: 1.5017... Val Loss: 1.4591
Epoch: 20/20... Step: 2660... Loss: 1.4863... Val Loss: 1.4555
Epoch: 20/20... Step: 2670... Loss: 1.4960... Val Loss: 1.4541
Epoch: 20/20... Step: 2680... Loss: 1.4942... Val Loss: 1.4606
Epoch: 20/20... Step: 2690... Loss: 1.4917... Val Loss: 1.4533
Epoch: 20/20... Step: 2700... Loss: 1.4997... Val Loss: 1.4573
Epoch: 20/20... Step: 2710... Loss: 1.4522... Val Loss: 1.4557
Epoch: 20/20... Step: 2720... Loss: 1.4713... Val Loss: 1.4538
Epoch: 20/20... Step: 2730... Loss: 1.4673... Val Loss: 1.4533
Epoch: 20/20... Step: 2740... Loss: 1.4518... Val Loss: 1.4516
Epoch: 20/20... Step: 2750... Loss: 1.4577... Val Loss: 1.4505
Epoch: 20/20... Step: 2760... Loss: 1.4481... Val Loss: 1.4497
Epoch: 20/20... Step: 2770... Loss: 1.4924... Val Loss: 1.4486
Epoch: 20/20... Step: 2780... Loss: 1.4908... Val Loss: 1.4486

Getting the best model

To set your hyperparameters to get the best performance, you'll want to watch the training and validation losses. If your training loss is much lower than the validation loss, you're overfitting: increase regularization (more dropout) or use a smaller network. If the training and validation losses are close, you're underfitting, so you can increase the size of the network.

Hyperparameters

Here are the hyperparameters for the network.

In defining the model:

  • n_hidden - The number of units in the hidden layers.
  • n_layers - Number of hidden LSTM layers to use.

We assume that the dropout probability and learning rate will be kept at their defaults in this example.

And in training:

  • batch_size - Number of sequences running through the network in one pass.
  • seq_length - Number of characters in the sequences the network is trained on. Larger is typically better; the network will learn more long-range dependencies, but it takes longer to train. 100 is usually a good number here.
  • lr - Learning rate for training

Here's some good advice from Andrej Karpathy on training the network, copied here for your benefit from his char-rnn repository.

Tips and Tricks

Monitoring Validation Loss vs. Training Loss

If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:

  • If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
  • If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)

Approximate number of parameters

The two most important parameters that control the model are n_hidden and n_layers. I would advise that you always use an n_layers of either 2 or 3. The n_hidden can be adjusted based on how much data you have. The two important quantities to keep track of here are:

  • The number of parameters in your model. In the original char-rnn this is printed when you start training; in this notebook you can compute it yourself (see the snippet after these examples).
  • The size of your dataset. 1MB file is approximately 1 million characters.

These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:

  • I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make n_hidden larger.
  • I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
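
This PyTorch notebook doesn't print the parameter count automatically, but you can compute it directly from the net defined above, for example:

# Count the trainable parameters in the model
n_params = sum(p.numel() for p in net.parameters() if p.requires_grad)
print('{:,} trainable parameters'.format(n_params))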

Best models strategy

The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.

It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.

By the way, the sizes of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set, or otherwise the validation performance will be noisy and not very informative.

Checkpoint

After training, we'll save the model so we can load it again later if we need to. Here I'm saving the parameters needed to recreate the same architecture: the hidden layer hyperparameters and the text characters.


In [16]:
# Change the name, for saving multiple files
model_name = './models/rnn_x_epoch.net'

checkpoint = {'n_hidden': net.n_hidden,
              'n_layers': net.n_layers,
              'state_dict': net.state_dict(),
              'tokens': net.chars}

with open(model_name, 'wb') as f:
    torch.save(checkpoint, f)

Making Predictions

Now that the model is trained, we'll want to sample from it and make predictions about next characters! To sample, we pass in a character and have the network predict the next character. Then we take that character, pass it back in, and get another predicted character. Just keep doing this and you'll generate a bunch of text!

A note on the predict function

The output of our RNN comes from a fully-connected layer, which outputs a score for each possible next character.

To actually get the next character, we apply a softmax function, which gives us a probability distribution that we can then sample to predict the next character.
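
For instance (hypothetical scores over four characters), softmax turns raw scores into probabilities that sum to one:

scores = torch.tensor([[2.0, 0.5, -1.0, 0.1]])
probs = F.softmax(scores, dim=1)
print(probs, probs.sum())   # four probabilities summing to 1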

Top K sampling

Our predictions come from a categorical probability distribution over all the possible characters. We can make the sampled text more reasonable (with less variability) by only considering the $K$ most probable characters. This will prevent the network from giving us completely absurd characters while allowing it to introduce some noise and randomness into the sampled text. Read more about torch.topk in the PyTorch documentation.
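
As a small illustration of topk with hypothetical probabilities over five characters (it returns the largest values and their indices, sorted in descending order):

p = torch.tensor([0.05, 0.40, 0.10, 0.30, 0.15])
top_p, top_ch = p.topk(3)
print(top_p)    # tensor([0.4000, 0.3000, 0.1500])
print(top_ch)   # tensor([1, 3, 4])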


In [17]:
def predict(net, char, h=None, top_k=None):
        ''' Given a character, predict the next character.
            Returns the predicted character and the hidden state.
        '''
        
        # Tensor inputs
        x = np.array([[net.char2int[char]]])
        x = one_hot_encode(x, len(net.chars))
        inputs = torch.from_numpy(x)
        
        if(train_on_gpu):
            inputs = inputs.cuda()
        
        # Detach hidden state from history
        h = tuple([each.data for each in h])
        # Get the output of the model
        out, h = net(inputs, h)

        # Get the character probabilities
        p = F.softmax(out, dim=1).data
        if(train_on_gpu):
            p = p.cpu() # move to cpu
        
        # Get top characters
        if top_k is None:
            top_ch = np.arange(len(net.chars))
        else:
            p, top_ch = p.topk(top_k)
            top_ch = top_ch.numpy().squeeze()
        
        # Select the likely next character with some element of randomness
        p = p.numpy().squeeze()
        char = np.random.choice(top_ch, p=p/p.sum())
        
        # Return the encoded value of the predicted char and the hidden state
        return net.int2char[char], h

Priming and generating text

Typically you'll want to prime the network so you can build up a hidden state. Otherwise the network will start out generating characters at random. In general the first bunch of characters will be a little rough since it hasn't built up a long history of characters to predict from.


In [18]:
def sample(net, size, prime='The', top_k=None):
        
    if(train_on_gpu):
        net.cuda()
    else:
        net.cpu()
    # Eval mode
    net.eval() 
    
    # First off, run through the prime characters
    chars = [ch for ch in prime]
    h = net.init_hidden(1)
    for ch in prime:
        char, h = predict(net, ch, h, top_k=top_k)

    chars.append(char)
    
    # Now pass in the previous character and get a new one
    for ii in range(size):
        char, h = predict(net, chars[-1], h, top_k=top_k)
        chars.append(char)

    return ''.join(chars)

In [19]:
print(sample(net, 1000, prime='Anna', top_k=5))


Anna take his capper. To be a long that with a same
work her as he had answered to his hinds.

"Why all the men, to think well, and I should to be seen a marrear," he said, beginning of she could not
tell that a servace and her
found the post the tone of his shanes of also thoughts of this, he did not her true of its talks of the head of she cloors. "That's
a thougat and carread, with you. We sup of her table?" he was she was a life were a side and hand he fing a sead had been hands. And with a same the husband of that alsain, and he could not be when anything, he was that some of the crued of the poss to the cape of her hand, though the pistical shrange which the conterped and head
at the threat of him, while she was an always wife.
"Here, why though a simes when I don't go to her a chair."

"Oh, and I'm becines on their feeling of the must and any all of his same, the conderion of his bat after, I
see to be to meaning him at his bott."

"All you
was it. There had so the pictles
of souse. 

Loading a checkpoint


In [20]:
# Here we load in the model we saved after training for 20 epochs, `rnn_x_epoch.net`
with open('./models/rnn_x_epoch.net', 'rb') as f:
    checkpoint = torch.load(f)
    
loaded = CharRNN(checkpoint['tokens'],
                 n_hidden=checkpoint['n_hidden'],
                 n_layers=checkpoint['n_layers'])
loaded.load_state_dict(checkpoint['state_dict'])

In [21]:
# Sample using a loaded model
print(sample(loaded, 2000,
             top_k=5,
             prime="And Levin said"))


And Levin said in with a much way on the mind and he was
now ansone. At him, and there he said, but the country about the stude, were home and hand to him and to the
farle, and strange, but a commantion he would have been bought true at the talts.

"Who's say, the presaraters. And I don't be something,"
said Stepan Arkadyevitch, sat soon
well, and streaged to him that had been botk a little arm in the sont had
never saw the man had that still.

"Whill you some as the cincustion to his moter, I don't come to make the the man too, as though I have never told her."

"It's seemed it would his fort of shill." He
was a parest of his straich of the contrors. And that he had started at the mut of the secrosal sheer to him.

"Well, I can't get to me of suppoming." And he stopted, and would not think those. She had not have been back, and with
almoment to the saming of subjants. She
was straight, tried in, sat so tending. And he stopped,
and at all the sick simple was hounded that he had a say, as
he had been at impition, he shined that he had trued a tried in his sistings in with the hang, with
a though to be anoting. His heart of the
sole and an offering he had strange to the same one to the matters, he was an offer and that he was a single that he had never so mush at
the moring and all the men had, the child of the
precise and some shear was she saw taking, with their his
and hourd and happened on her and his
hand to be
there her succes of his shade to her.

"Who wishen her," said Stapan Arkadyevitch.

"And you have suppiset..."

"Ahere
she was all hinding all take him, and seet them."

"I seemed to her the peasants when she had at the mother was all that at his
face of that servul, and she was a little," she said.

"You wisher her for my husband into the to streaging to thim was so at that and as it to bo oversime to take it of all sometitious, she say, the servatial
sense what tell you. I have been to
to the tone. Whone's breaken to my stay. I have said he's becouse to the prover and